Interpretable AI

Machine learning models are only as interpretable as their features. Interpretable features are ones that are meaningful to users: quantities that are easily understood and relevant to the problem at hand. Interpretability matters because it lets users understand the reasoning behind a model's predictions.
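As a minimal sketch of what "interpretable features" can mean in practice (my illustration, not from the notes above), here is an inherently interpretable model whose inputs are quantities a user can reason about directly. The feature names and data are made up; it assumes scikit-learn is available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical, human-meaningful features for a loan-default example.
feature_names = [
    "income_to_debt_ratio",
    "years_at_current_job",
    "missed_payments_last_year",
]
X = np.array([
    [2.5, 4, 0],
    [0.8, 1, 3],
    [1.9, 7, 1],
    [0.5, 0, 4],
])
y = np.array([0, 1, 0, 1])  # 1 = defaulted

model = LogisticRegression().fit(X, y)

# Because each feature is meaningful on its own, each coefficient can be read
# as "how this quantity pushes the prediction" - which is what makes the
# model's reasoning accessible to a user.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```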

Explainable AI is a difficult topic. Interpretable AI is a bit different: it should reflect an actual scientific cause and effect. In contrast, I think explainable AI is more about finding a couple of features that have some impact on the prediction and pointing to them.
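To make that contrast concrete, here is a hedged sketch (my example, using scikit-learn's permutation importance as one possible post-hoc technique) of explaining a black-box model. The importance scores only say which features have some impact on predictions; they do not establish the cause-and-effect relationship an inherently interpretable model can offer.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# Hypothetical target: only the first feature actually matters.
y = (X[:, 0] > 0).astype(int)

# Fit a black-box model, then ask post hoc which features it relies on.
black_box = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)

for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {importance:.3f}")
```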

Links

Thoughts